How is Hutter being misrepresented here?

I mentioned how vague it is; it is impossible for anyone to check exactly what is meant without going over literally everything Hutter ever wrote.
Hutter was much less ambiguously misrepresented/misquoted during the more recent debate with Holden Karnofsky (due to the latter’s interest in AIXI), so I am assuming, by the process of induction, that the same happened here.
it is impossible for anyone to check exactly what is meant without going over literally everything Hutter ever wrote.
As it happens, I looked it up and did this ‘impossible’ task in a few seconds before I replied, because I expected the basis for your claim to be as lame as it is; here’s the third hit in Google for ‘marcus hutter ai risk’: “Artificial Intelligence: Overview”
Slide 67 includes some of the more conventional worries like technological unemployment and abuse of AI tools; more importantly, slide 68 includes a perfectly standard statement of Singularity risks, citing, as it happens, Moravec, Good, Vinge, and Kurzweil; I’ll quote it in full (emphasis added):
What If We Do Succeed?
The success of AI might mean the end of the human race.
Natural evolution is replaced by artificial evolution. AI systems will be our mind children (Moravec 2000)
Once a machine surpasses the intelligence of a human it can design even smarter machines (I.J.Good 1965).
This will lead to an intelligence explosion and a technological singularity at which the human era ends.
Prediction beyond this event horizon will be impossible (Vernor Vinge 1993)
Alternative 1: We keep the machines under control.
Alternative 2: Humans merge with or extend their brain by AI. Transhumanism (Ray Kurzweil 2000)
Let’s go back to what Carl said:
Marcus Hutter, Jurgen Schmidhuber, Kevin Warwick, and a number of other AI folk have written about the future of AI and risk of human extinction, etc.
Sure sounds like ‘Marcus Hutter...have written about the future of AI and risk of human extinction’.
Which in that case demonstrates awareness of the risk among AI researchers, while at the same time not demonstrating that Hutter finds it particularly likely (‘might’) or agrees with any specific alarmist rhetoric. I can’t know if that’s what Carl was actually referring to. I do assure you that just about every AI researcher has seen The Terminator.
I gave the Hutter quote I was thinking of upthread.
My aim was basically to distinguish between buying Eliezer’s claims and taking intelligence explosion and AI risk seriously, and to reject the idea that the ideas in question came out of nowhere. One can think AI risk is worth investigating without thinking much of Eliezer’s views or SI.
I agree that the cited authors would assign much lower odds of catastrophe given human-level AI than Eliezer. The same statement would be true of myself, or of most people at SI and FHI: Eliezer is at the far right tail on those views. Likewise for the probability that a small team assembled in the near future could be the first to build safe AGI, in a situation where catastrophe would otherwise have ensued.
Well, I guess that’s fair enough. In the quote at the top, though, I am specifically criticizing the extreme view. At the end of the day, the entire raison d’être of SI is the claim that without funding you the risk would be higher, the claim that you are somehow fairly unique. And there are many risks (for example, the risk of a lethal flu-like pandemic) which are much more clearly understood and where specific efforts have a much more clearly predictable outcome of reducing the risk. Favouring one group of AI theorists over others does not have a clearly predictable outcome of reducing the risk.
(I am inclined to believe that pandemic risk is underfunded because it would primarily decimate the poorer countries, ending the existence of entire cultures, whereas ‘existential risk’ is a fancy phrase for a risk to the privileged.)
Which in that case demonstrates awareness of the risk among AI researchers, while at the same time not demonstrating that Hutter finds it particularly likely (‘might’) or agrees with any specific alarmist rhetoric.
It need not demonstrate any such thing to fit Carl’s statement perfectly and give the lie to your claim that he was misrepresenting Hutter.
I do assure you that just about every AI researcher has seen The Terminator.
Sure, hence the Hutter citation of “(Cameron 1984)”. Oh wait.